In many real-world settings, image observations of a freely rotating 3D rigid body, such as a satellite, may be available when low-dimensional measurements are not. However, the high dimensionality of image data precludes the use of standard techniques for learning dynamics, and a lack of interpretability reduces the usefulness of standard deep learning methods. In this work, we present a physics-informed neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this with a multi-stage prediction pipeline that maps individual images to a latent representation isomorphic to $\mathbf{SO}(3)$, computes angular velocities from latent pairs, and predicts future latent states using the Hamiltonian equations of motion with a learned representation of the Hamiltonian. We demonstrate the efficacy of our approach on a new rotating rigid-body dataset with sequences of rotating cubes and rectangular prisms of uniform and non-uniform density.
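A minimal sketch of the final pipeline stage described above: rolling latent states forward with Hamilton's equations, where the Hamiltonian itself is a learned network. The `HamiltonianNet` architecture, the flat $(q, p)$ latent layout, and the explicit-Euler integrator are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Learned scalar Hamiltonian H(q, p) on the latent phase space."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).sum()

def hamiltonian_step(H, q, p, dt=0.01):
    """One explicit-Euler step of dq/dt = dH/dp, dp/dt = -dH/dq."""
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    dHdq, dHdp = torch.autograd.grad(H(q, p), (q, p))
    return q + dt * dHdp, p - dt * dHdq

# Roll out future latent states from an initial (q0, p0) estimated from images.
H = HamiltonianNet(dim=3)
q, p = torch.randn(1, 3), torch.randn(1, 3)
trajectory = []
for _ in range(100):
    q, p = hamiltonian_step(H, q, p)
    trajectory.append(q)
```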
This paper presents an algorithm for a team of mobile robots to simultaneously learn a spatial field over a domain and spatially distribute themselves to optimally cover it. Departing from previous approaches that estimate the spatial field through a centralized Gaussian process, this work leverages the spatial structure of the coverage problem and presents a decentralized strategy in which samples are aggregated locally by establishing communications through the boundaries of a Voronoi partition. We present an algorithm in which each robot runs a local Gaussian process computed from its own measurements and those provided by its Voronoi neighbors, which are incorporated into the individual robot's Gaussian process only if they provide sufficiently novel information. The performance of the algorithm is evaluated in simulation and compared with centralized approaches.
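A minimal sketch of the per-robot update implied above: each robot keeps a local Gaussian process and absorbs a Voronoi neighbor's sample only when the local predictive uncertainty at that location is high, i.e., the sample is sufficiently novel. The novelty threshold and RBF kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class LocalGP:
    def __init__(self, novelty_threshold=0.1):
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        self.X, self.y = [], []
        self.threshold = novelty_threshold

    def add_own_sample(self, x, y):
        # Own measurements are always incorporated.
        self.X.append(x)
        self.y.append(y)
        self.gp.fit(np.array(self.X), np.array(self.y))

    def maybe_add_neighbor_sample(self, x, y):
        # Keep a Voronoi neighbor's sample only if it is informative here.
        _, std = self.gp.predict(np.array([x]), return_std=True)
        if std[0] > self.threshold:
            self.add_own_sample(x, y)
```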
Cooperative bandit problems are increasingly becoming relevant due to their applications in large-scale decision-making. However, most research on this problem focuses on settings with perfect communication, whereas in most real-world distributed settings, communication occurs over stochastic networks with arbitrary corruptions and delays. In this paper, we study cooperative bandit learning under three typical real-world communication scenarios, namely (a) message-passing over stochastic time-varying networks, (b) instantaneous reward-sharing over a network with stochastic delays, and (c) message-passing with adversarially corrupted rewards, including Byzantine communication. For each of these environments, we propose decentralized algorithms that achieve competitive performance, along with near-optimal guarantees on the incurred group regret. Furthermore, in the setting with perfect communication, we present an improved delayed-update algorithm that outperforms the existing state-of-the-art on various network topologies. Finally, we present tight network-dependent minimax lower bounds on the group regret. Our proposed algorithms are straightforward to implement and obtain competitive empirical performance.
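A minimal sketch of bandit learning under stochastic delays, one of the settings above: each pull's reward arrives after a random delay, and the agent updates its statistics only when the reward lands. The delay model, toy environment, and exploration constant are illustrative assumptions, not the paper's algorithm.

```python
import math
import random
from collections import defaultdict

def delayed_ucb(arms, horizon, max_delay=5, c=2.0):
    counts = defaultdict(int)          # rewards received per arm
    sums = defaultdict(float)          # reward totals per arm
    inbox = defaultdict(list)          # arrival step -> [(arm, reward), ...]
    for t in range(1, horizon + 1):
        for arm, r in inbox.pop(t, []):   # apply rewards arriving now
            counts[arm] += 1
            sums[arm] += r
        untried = [a for a in range(arms) if counts[a] == 0]
        if untried:
            a = untried[0]
        else:
            a = max(range(arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(c * math.log(t) / counts[i]))
        reward = random.gauss(0.5 + 0.1 * a, 0.1)   # toy environment
        inbox[t + random.randint(1, max_delay)].append((a, reward))
    return counts
```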
Recent approaches for modeling the dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, enabling long-term prediction of dynamics in image space and synthesis of energy-based controllers.
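A minimal sketch of the coordinate-aware VAE idea: images are encoded to a small set of generalized coordinates (plus a log-variance for the reparameterization trick) and decoded back to pixels. The layer sizes and the two-coordinate latent are illustrative assumptions; the paper's geometry-aware encoder is not reproduced here.

```python
import torch
import torch.nn as nn

class CoordinateVAE(nn.Module):
    def __init__(self, img_dim=64 * 64, n_coords=2):
        super().__init__()
        # Encoder outputs mean and log-variance for each generalized coordinate.
        self.encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * n_coords))
        self.decoder = nn.Sequential(nn.Linear(n_coords, 256), nn.ReLU(),
                                     nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        q = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(q), mu, logvar

# Coordinates q inferred from consecutive frames give finite-difference
# velocities, on which a Lagrangian network can then be fit (not shown).
```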
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
Recently, learning-based controllers have been shown to push mobile robotic systems to their limits and provide the robustness needed for many real-world applications. However, only classical optimization-based control frameworks offer the inherent flexibility to be dynamically adjusted during execution by, for example, setting target speeds or actuator limits. We present a framework to overcome this shortcoming of neural controllers by conditioning them on an auxiliary input. This advance is enabled by including a feature-wise linear modulation (FiLM) layer. We use model-free reinforcement learning to train quadrotor control policies for the task of navigating through a sequence of waypoints in minimum time. By conditioning the policy on the maximum available thrust or the viewing direction relative to the next waypoint, a user can regulate the aggressiveness of the quadrotor's flight during deployment. We demonstrate in simulation and in real-world experiments that a single control policy can achieve close to time-optimal flight performance across the entire performance envelope of the robot, reaching up to 60 km/h and 4.5g in acceleration. The ability to guide a learned controller during task execution has implications beyond agile quadrotor flight, as conditioning the control policy on human intent helps safely bring learning-based systems out of the well-defined laboratory environment into the wild.
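A minimal sketch of feature-wise linear modulation as used above: an auxiliary input (e.g., a thrust limit) is mapped to a per-feature scale and shift applied to a hidden layer of the policy. The layer sizes and observation/action dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    def __init__(self, cond_dim, feature_dim):
        super().__init__()
        self.film = nn.Linear(cond_dim, 2 * feature_dim)

    def forward(self, features, condition):
        # Condition produces a scale (gamma) and shift (beta) per feature.
        gamma, beta = self.film(condition).chunk(2, dim=-1)
        return gamma * features + beta

class ConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=18, cond_dim=1, act_dim=4, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.film = FiLMLayer(cond_dim, hidden)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, condition):
        return self.head(torch.relu(self.film(self.body(obs), condition)))

policy = ConditionedPolicy()
action = policy(torch.randn(1, 18), torch.tensor([[0.7]]))  # 70% max thrust
```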
Double-blind peer review is considered a pillar of academic research because it is perceived to ensure a fair, unbiased, and fact-centered scientific discussion. Yet, experienced researchers can often correctly guess from which research group an anonymous submission originates, biasing the peer-review process. In this work, we present a transformer-based, neural-network architecture that only uses the text content and the author names in the bibliography to attribute an anonymous manuscript to an author. To train and evaluate our method, we created the largest authorship-identification dataset to date. It leverages all research papers publicly available on arXiv, amounting to over 2 million manuscripts. In arXiv subsets with up to 2,000 different authors, our method achieves an unprecedented authorship attribution accuracy, where up to 95% of papers are attributed correctly. Thanks to our method, we are not only able to predict the author of an anonymous work but also identify weaknesses of the double-blind review process by finding the key aspects that make a paper attributable. We believe that this work gives precious insights into how a submission can remain anonymous in order to support an unbiased double-blind review process.
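A minimal sketch of the input construction implied above: the manuscript text is paired with the author names extracted from its bibliography and fed to a transformer classifier over candidate authors. The model name, example strings, and 2,000-author label space are illustrative assumptions, not the paper's architecture.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2000)  # one class per candidate author

body_text = "We study the convergence of ..."    # manuscript content
bib_authors = "Smith J.; Doe A.; Nguyen T."      # names from the bibliography
inputs = tokenizer(body_text, bib_authors, truncation=True,
                   max_length=512, return_tensors="pt")
predicted_author = model(**inputs).logits.argmax(-1)
```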
Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data -- examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.
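A simplified sketch of the contrastive idea behind the CRINGE loss: for each token of a negative example, contrast its logit against a "positive" token sampled from the model's own top-k predictions, pushing probability mass away from the negative token. The top-k size and the binary cross-entropy formulation are simplifications of the paper's loss.

```python
import torch
import torch.nn.functional as F

def cringe_style_loss(logits, negative_tokens, k=5):
    """logits: (seq, vocab); negative_tokens: (seq,) token ids to discourage."""
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    # Sample one positive token per position from the model's top-k.
    probs = F.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, 1).squeeze(-1)
    pos_idx = topk_idx.gather(-1, choice.unsqueeze(-1)).squeeze(-1)
    pos_logit = logits.gather(-1, pos_idx.unsqueeze(-1))
    neg_logit = logits.gather(-1, negative_tokens.unsqueeze(-1))
    pair = torch.cat([pos_logit, neg_logit], dim=-1)     # (seq, 2)
    # Binary contrast: the positive token should win against the negative one.
    target = torch.zeros(pair.size(0), dtype=torch.long)
    return F.cross_entropy(pair, target)
```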
The use of multilingual language models for tasks in low- and high-resource languages has been a success story in deep learning. In recent times, Arabic has been receiving widespread attention on account of its dialectal variance. While prior research studies have tried to adapt these multilingual models for dialectal variants of Arabic, it still remains a challenging problem owing to the lack of sufficient monolingual dialectal data and parallel translation data for such dialectal variants. It remains an open problem whether the limited dialectal data can be used to improve models trained in Arabic on its dialectal variants. First, we show that multilingual BERT (mBERT) incrementally pretrained on Arabic monolingual data takes less training time and yields comparable accuracy when compared to our custom monolingual Arabic model, and beats existing models (by an avg metric of +$6.41$). We then explore two continual pre-training methods: (1) using small amounts of dialectal data for continual finetuning and (2) using parallel Arabic-to-English data and a Translation Language Modeling loss function. We show that both approaches help improve performance on dialectal classification tasks ($+4.64$ avg. gain) when used on monolingual models.
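A minimal sketch of the first step above: continual masked-language-model pretraining of mBERT on a small monolingual corpus. The dataset path and hyperparameters are placeholder assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Hypothetical plain-text corpus of dialectal Arabic, one example per line.
corpus = load_dataset("text", data_files={"train": "dialect_corpus.txt"})
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True,
                                           max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-dialect", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
```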
We present a new NLP task and dataset from the domain of the U.S. civil procedure. Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument, accompanied by a detailed analysis of why the argument applies in that case. Since the dataset is based on a book aimed at law students, we believe that it represents a truly complex task for benchmarking modern legal language models. Our baseline evaluation shows that fine-tuning a legal transformer provides some advantage over random baseline models, but our analysis reveals that the actual ability to infer legal arguments remains a challenging open research question.